⚠️ Caution: Review failed. The pull request is closed.

📝 Walkthrough

Adds a full Astro-based documentation site, CI/CD workflows (Azure deploy and version monitor), shared TypeScript libraries for desktop version handling, many UI components and styles, extensive Chinese docs and image prompts, and hosting/cache configuration and assets for the docs site.
Sequence diagram(s): collapsed mermaid diagram omitted from this export.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)
Actionable comments posted: 13
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (22)
src/components/StructuredData.astro (lines 7-28)

7-28: 🛠️ Refactor suggestion | 🟠 Major | Props interface must be named `Props` for Astro to infer `Astro.props` types.

Astro resolves the component's prop types by looking for an interface (or type alias) literally named `Props` in the frontmatter. The current name `StructuredDataProps` means `Astro.props` and its destructuring at line 30 are effectively `any`, so callers get no type-checking or IDE autocompletion.

♻️ Rename to `Props`
```diff
-interface StructuredDataProps {
+interface Props {
   type: 'WebPage' | 'Article' | 'BlogPosting' | 'Organization';
```
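For context, a minimal sketch of the convention this relies on (component and prop names illustrative, not taken from the PR):

```astro
---
// Astro infers the type of Astro.props only from an interface
// literally named `Props` in the frontmatter.
interface Props {
  title: string;
}
const { title } = Astro.props; // typed as string
---
<h1>{title}</h1>
```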
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/components/StructuredData.astro` around lines 7-28: the interface currently named StructuredDataProps prevents Astro from inferring Astro.props; rename the interface to Props so Astro can pick up the types and enable proper type-checking/autocomplete for Astro.props and the component's destructured props (the interface fields like type, data, canonicalUrl, and nested author/publisher/image fields should remain unchanged).

src/components/StructuredData.astro (lines 33-38)
33-38: ⚠️ Potential issue | 🟠 Major | `data.url` silently overrides `canonicalUrl` — spread order is semantically backwards.

Because `...data` appears after `url: canonicalUrl`, any caller that passes `data.url` will overwrite the canonical URL. The prop name `canonicalUrl` implies it should be the authoritative `url` value in the output. Either give `canonicalUrl` precedence by spreading `data` first, or remove `url` from the `data` interface to avoid the ambiguity entirely.

🛠️ Option A — give `canonicalUrl` precedence (swap spread order)
```diff
 const structuredData = {
   '@context': 'https://schema.org',
   '@type': type,
+  ...data,
   url: canonicalUrl,
-  ...data,
 };
```

🛠️ Option B — remove `url` from `data` to eliminate the conflict
```diff
 data: {
   name?: string;
   headline?: string;
   description: string;
-  url?: string;
   datePublished?: string;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/StructuredData.astro` around lines 33-38: the constructed object structuredData currently places ...data after url: canonicalUrl, so any data.url will override the canonicalUrl. Fix by giving canonicalUrl precedence — spread data first (i.e., ...data before url) so url: canonicalUrl wins — or alternatively remove url from the data prop/interface so callers cannot pass data.url; update the structuredData creation accordingly (referencing structuredData, canonicalUrl, data, and url).

src/content/docs/blog/2026-01-28-利用-worker-threads-优化-vite-构建性能的实战.mdx (line 207)
207: ⚠️ Potential issue | 🟠 Major | `new Worker('./worker.ts')` won't work natively — Node.js worker threads require compiled JS.

The Node.js `worker_threads` `Worker` constructor accepts a `.js` file (or a data URL), not a `.ts` source file. Readers who copy this verbatim will get a runtime error. The article should note that the TypeScript file must first be compiled (e.g., via `tsc`, `esbuild`, or using `tsx`/`ts-node` with a loader), or show the correct approach for a TypeScript-first project (e.g., passing a compiled output path or using a bundler-aware workaround).

A common pattern in Vite plugins is to use `new URL('./worker.js', import.meta.url)` pointing at the compiled output, or to inline the worker via a bundler plugin.
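A minimal sketch of that pattern, assuming the build emits `worker.js` next to the compiled module (the filename is an assumption, not taken from the article):

```ts
import { Worker } from 'node:worker_threads';

// Resolve the compiled .js artifact relative to this module's file URL.
// The .ts source is never handed to the Worker constructor.
const worker = new Worker(new URL('./worker.js', import.meta.url));

worker.on('message', (msg) => {
  console.log('worker result:', msg);
});
```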
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/content/docs/blog/2026-01-28-利用-worker-threads-优化-vite-构建性能的实战.mdx` at line 207: the example uses new Worker('./worker.ts'), which fails at runtime because Node's worker_threads expects compiled JS or a data URL. Update the example and text to show a valid approach: explain that worker.ts must be compiled and demonstrate creating the Worker with the compiled output (e.g., new Worker(new URL('./worker.js', import.meta.url)) or a bundler/runtime toolchain like tsc/esbuild/tsx/ts-node), and mention alternatives (data URL or bundler inlining) so readers know to point Worker at a .js bundle or use a loader rather than the raw .ts file.

src/content/docs/blog/2026-01-28-利用-worker-threads-优化-vite-构建性能的实战.mdx (lines 185-242)
185-242: ⚠️ Potential issue | 🟠 Major | The `WorkerPool` example has two bugs that would prevent it from working.

Bug 1 — `activeWorkers` is never declared. The class body only declares `workers` and `taskQueue`, but `this.activeWorkers` is used as a `Map<Worker, ObfuscationTask>` in `createWorker`, `runTask`, and `dispatchTask`. Any reader who copies this snippet will get a `TypeError: Cannot read properties of undefined` at runtime.

Bug 2 — `runTask` promises never resolve. The `worker.on('message', ...)` handler receives the worker's `result` but never calls `job.resolve(result)`. Every caller of `runTask()` will hang indefinitely. The handler needs to look up the in-flight job for that worker and resolve it.

🐛 Proposed fix for both bugs
```diff
 export class WorkerPool {
   private workers: Worker[] = []
+  private activeWorkers = new Map<Worker, { task: ObfuscationTask; resolve: (r: ObfuscationResult) => void; reject: (e: Error) => void }>()
   private taskQueue: Array<{
     task: ObfuscationTask
     resolve: (result: ObfuscationResult) => void
     reject: (error: Error) => void
   }> = []

   // ...

   private createWorker() {
     const worker = new Worker('./worker.ts')

     worker.on('message', (result: ObfuscationResult) => {
+      // Resolve the in-flight job for this worker
+      const job = this.activeWorkers.get(worker)
+      if (job) job.resolve(result)
+
       const nextTask = this.taskQueue.shift()
       if (nextTask) {
         this.dispatchTask(worker, nextTask)
       } else {
         this.activeWorkers.delete(worker)
       }
     })

     this.workers.push(worker)
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/content/docs/blog/2026-01-28-利用-worker-threads-优化-vite-构建性能的实战.mdx` around lines 185-242: the WorkerPool class is missing the activeWorkers map, and the worker message handler never resolves the corresponding promise. Add a private activeWorkers: Map<Worker, ObfuscationTask> (or Map<Worker, {task: ObfuscationTask, resolve, reject}>) to the WorkerPool class, initialize it in the constructor, and update createWorker, runTask, and dispatchTask to use that map; in createWorker's worker.on('message', (result) => { ... }) look up the in-flight job from activeWorkers for that worker, call its resolve(result) (or resolve/reject on error), remove the worker from activeWorkers, and then pull the next task from taskQueue and dispatch it so promises returned by runTask actually resolve.

src/content/docs/related-software-installation/postgresql/install-on-windows.md.backup (lines 1-158)
1-158: ⚠️ Potential issue | 🟠 Major | Remove the `.backup` file from version control.

Backup files (`.md.backup`) are development artifacts that should not be tracked in git. Astro will never process this file (the `.backup` extension is not a recognized content format), so it contributes dead content to the repo. It also contains stale image paths (`/img/install-postgres-windows/…`) that diverge from the canonical `.mdx` counterpart (`../../img/installation/install-postgres-windows/…`), making it a source of confusion.

Add the extension to `.gitignore` and delete the file from the tree:

🗑️ Suggested cleanup
```diff
 # .gitignore
+*.backup
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/content/docs/related-software-installation/postgresql/install-on-windows.md.backup` around lines 1-158: remove the tracked backup file src/content/docs/related-software-installation/postgresql/install-on-windows.md.backup from the repository (git rm --cached or git rm) and commit the deletion, and add a rule to .gitignore to ignore *.backup (or specifically install-on-windows.md.backup) so future backups aren't committed; ensure there are no required changes to the canonical Markdown/MDX source (e.g., the real install-on-windows.md/.mdx) before deleting, and include a short commit message like "chore: remove .backup dev artifact and ignore *.backup".

src/content/docs/blog/2026-01-22-github-issues-集成.mdx (lines 113-119)
113-119: ⚠️ Potential issue | 🟠 Major | Security section misleadingly omits the XSS risk of `localStorage` token storage.

Line 118 states that the Same-Origin Policy (SOP) ensures only HagiCode scripts can read the token. This is inaccurate in a relevant threat model: SOP only prevents cross-origin access. Any script injected into the same origin via XSS can freely read `localStorage` and exfiltrate the GitHub token — potentially giving an attacker full repo write access.

The section concludes (line 328) that "Token 不经过服务器数据库,降低了泄露风险" ("the token never passes through a server database, lowering the leak risk") as a pure security gain, but it trades a server-side DB leak risk for a client-side XSS risk. Readers implementing this pattern should be aware of mitigations such as:
- Clearing the token on tab/session close (sessionStorage instead of localStorage).
- Using a short-lived token with minimal scopes (see the scope comment below).
- Ensuring a strong Content Security Policy (CSP) to reduce XSS attack surface.
Consider adding a "注意事项" (caveats) paragraph about XSS to avoid giving readers a false sense of security.
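For illustration, a minimal session-scoped sketch of the first mitigation (helper names are hypothetical; `gh_token` is the key used in the article's snippets):

```ts
// Sketch, assuming a session-scoped token is acceptable for the UX.
// sessionStorage is cleared when the tab closes, shrinking the exposure
// window; it does NOT stop same-origin scripts (i.e., XSS) from reading it.
function saveToken(token: string): void {
  sessionStorage.setItem('gh_token', token);
}

function loadToken(): string | null {
  return sessionStorage.getItem('gh_token');
}
```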
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/content/docs/blog/2026-01-22-github-issues-集成.mdx` around lines 113-119: update the "安全设计" section to remove the misleading claim that Same-Origin Policy (SOP) alone prevents token access, and add a "注意事项" (caveats) note explicitly warning that any script running in the same origin (e.g., via XSS) can read localStorage and exfiltrate tokens; in the token storage bullet (the one that currently states "Token 仅存储在浏览器的 `localStorage` 中"), change the wording to acknowledge XSS risk and add recommended mitigations: prefer sessionStorage for session-scoped tokens or clear tokens on tab close, use short-lived tokens with minimal scopes, and enforce a strong Content Security Policy (CSP); keep the rest of the security bullets intact but ensure the final sentence about reduced DB risk is balanced by this XSS caveat.

src/content/docs/blog/2026-01-22-github-issues-集成.mdx (lines 226-256)
226-256: ⚠️ Potential issue | 🟠 Major | Missing `Content-Type` header and outdated `Accept` header in GitHub API requests.

The `githubApi` function injects only `Authorization` and `Accept` headers, but POST requests sending JSON require an explicit `Content-Type: application/json`. Without it, the GitHub API may reject the request with `415 Unsupported Media Type`. The function also uses the outdated `application/vnd.github.v3+json` Accept header format; the current recommendation is `application/vnd.github+json`. Additionally, GitHub recommends explicitly pinning the API version via the `X-GitHub-Api-Version: 2022-11-28` header rather than relying on server defaults.

Proposed fix
```diff
 async function githubApi<T>(endpoint: string, options: RequestInit = {}): Promise<T> {
   const token = localStorage.getItem('gh_token');
   if (!token) throw new Error('Not connected to GitHub');

   const response = await fetch(`${GITHUB_API_BASE}${endpoint}`, {
     ...options,
     headers: {
       ...options.headers,
       Authorization: `Bearer ${token}`,
-      Accept: 'application/vnd.github.v3+json', // 指定 API 版本
+      'Accept': 'application/vnd.github+json',
+      'Content-Type': 'application/json',
+      'X-GitHub-Api-Version': '2022-11-28',
     },
   });
```
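A hypothetical call site under those headers (the `OWNER/REPO` path and payload are illustrative; `githubApi` and `GITHUB_API_BASE` come from the article's snippet):

```ts
// Sketch: creating an issue via the patched helper.
const issue = await githubApi<{ number: number }>(
  '/repos/OWNER/REPO/issues',
  { method: 'POST', body: JSON.stringify({ title: 'Bug report' }) },
);
console.log(`Created issue #${issue.number}`);
```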
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/content/docs/blog/2026-01-22-github-issues-集成.mdx` around lines 226-256: the githubApi function currently sets Authorization and an outdated Accept header and misses Content-Type for JSON bodies. Update githubApi to merge headers from options.headers and: 1) replace the Accept value with "application/vnd.github+json", 2) add "X-GitHub-Api-Version": "2022-11-28", and 3) when options.method is POST/PUT/PATCH (or when options.body is present, as with the stringified JSON used by createIssue), ensure "Content-Type": "application/json" is set (without overwriting any explicit header provided by the caller); keep these changes localized to the githubApi function used by createIssue.

src/components/StarlightHead.astro (lines 13-14)
13-14: ⚠️ Potential issue | 🟠 Major | `Astro.site` is `undefined` when `site` is not configured — `new URL(pathname, undefined)` throws at build time.

Astro's `site` property returns a `URL | undefined` and is `undefined` if no `site` value is set in `astro.config.*`. Passing `undefined` as the base to `new URL()` throws a `TypeError`, crashing every page render.

🛡️ Proposed fix — guard against undefined `Astro.site`
```diff
-const url = new URL(Astro.url.pathname, Astro.site);
-const canonicalUrl = url.href;
+const canonicalUrl = Astro.site
+  ? new URL(Astro.url.pathname, Astro.site).href
+  : Astro.url.href;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/StarlightHead.astro` around lines 13-14: Astro.site can be undefined, so calling new URL(Astro.url.pathname, Astro.site) will throw. Update the logic around Astro.url.pathname/Astro.site to guard against undefined by only constructing the URL when Astro.site is present (use Astro.site as the base for new URL) and otherwise derive canonicalUrl from Astro.url.pathname (or a safe fallback string) — modify the code that assigns url/canonicalUrl (the new URL call and the const canonicalUrl) to check Astro.site first and produce a safe string when it's undefined.

src/components/StarlightHead.astro (line 29)
29: ⚠️ Potential issue | 🟠 Major | Change `canonicalURL` to `canonical` in the SEO component props.

The `astro-seo` library expects the prop name `canonical`, not `canonicalURL`. Using the wrong prop name will prevent the canonical `<link>` tag from being emitted, causing the library to silently fall back to `Astro.url.href`.
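A one-line sketch of the corrected usage (surrounding props elided; `canonicalUrl` as defined earlier in the component):

```astro
<SEO title={title} description={description} canonical={canonicalUrl} />
```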
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/components/StarlightHead.astro` at line 29: in the SEO component usage (the SEO element in StarlightHead.astro), replace the incorrect prop name canonicalURL with canonical so the library emits the canonical <link>; e.g., change canonicalURL={canonicalUrl} to canonical={canonicalUrl} and verify the canonicalUrl variable is defined/available in the component scope (adjust the variable name if needed).

package.json (lines 21-22)
21-22: ⚠️ Potential issue | 🟠 Major | Move `playwright` and `@types/*` to `devDependencies`.

`playwright`, `@types/react`, and `@types/react-dom` are build/test-time concerns that should not inflate production installs. `playwright` alone adds hundreds of MB.

♻️ Proposed fix
"dependencies": { "@astrojs/mdx": "4.3.13", "@astrojs/partytown": "^2.1.4", "@astrojs/react": "^4.4.2", "@astrojs/sitemap": "^3.7.0", "@astrojs/starlight": "^0.37.4", - "@types/react": "^19.2.13", - "@types/react-dom": "^19.2.3", "astro": "^5.6.1", ... - "playwright": "^1.58.2", "react": "^19.2.4", "react-dom": "^19.2.4", ... + }, + "devDependencies": { + "@types/react": "^19.2.13", + "@types/react-dom": "^19.2.3", + "playwright": "^1.58.2" }Also applies to: 28-30
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` around lines 21-22: the dependencies list currently includes test/build-only packages (playwright, `@types/react`, `@types/react-dom`, and the other `@types` entries at 28-30). Move these entries out of "dependencies" and into "devDependencies" in package.json so production installs don't include them — either cut the keys "playwright", "@types/react", "@types/react-dom" (and the other `@types` entries referenced) from the dependencies object and paste them under devDependencies, or run package manager commands like npm install --save-dev playwright `@types/react` `@types/react-dom` to update package.json accordingly.

package.json (line 24)
24: ⚠️ Potential issue | 🟠 Major | Pin `astro-link-validator` to a specific tag to get deterministic installs.

The package itself recommends `npm install github:rodgtr1/astro-link-validator#v1.0.0` to pin to a specific version. Without a tag, every fresh `npm install` resolves to whatever is on the default branch at that moment, making builds non-deterministic. The package's author notes it will eventually be published to npm for a cleaner update experience. Additionally, a link-validation tool is only needed at build time and should be in `devDependencies`.

♻️ Proposed fix
- "astro-link-validator": "github:rodgtr1/astro-link-validator", + "astro-link-validator": "github:rodgtr1/astro-link-validator#v1.0.0",And move it to
devDependencies.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` at line 24: update package.json so the GitHub dependency "astro-link-validator" is pinned to a specific tag (e.g., github:rodgtr1/astro-link-validator#v1.0.0) and move that entry from "dependencies" to "devDependencies"; locate the current "astro-link-validator" entry in package.json, replace the value with the tagged ref, and relocate the key into the devDependencies object to ensure deterministic, build-time-only installs.

package.json (line 33)
33: ⚠️ Potential issue | 🟠 Major | Update `rehype-raw` to align with `@astrojs/mdx` dependencies.

The project declares `rehype-raw@^3.0.0`, but `@astrojs/mdx@4.3.13` (a direct dependency) requires `rehype-raw@^7.0.0`. This version mismatch causes npm to install both versions: `3.0.0` at the root and `7.0.0` nested under `@astrojs/mdx` and `@astrojs/markdown-remark`. Version duplication can cause type conflicts and runtime errors during the rehype transform pipeline.

♻️ Proposed fix
- "rehype-raw": "^3.0.0", + "rehype-raw": "^7.0.0",🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` at line 33: the project-level dependency "rehype-raw" is pinned to ^3.0.0, which conflicts with `@astrojs/mdx`@4.3.13's requirement of rehype-raw@^7.0.0. Update the "rehype-raw" entry in package.json to ^7.0.0 (or a range compatible with `@astrojs/mdx`) so only one major version is installed, then regenerate the lockfile and reinstall (e.g., run npm install or yarn install) to ensure the lockfile and node_modules reflect the single compatible version.

src/content/docs/quick-start/proposal-session.md.backup (lines 1-417)
1-417: ⚠️ Potential issue | 🟠 Major | Remove this backup file from the repository.

`proposal-session.md.backup` is a copy of `proposal-session.mdx` (identical content with the older absolute image paths). It won't be processed by Astro's `docsLoader()`, but it pollutes the content directory and may confuse contributors about which file is authoritative. Delete it and, if needed, preserve it locally or reference the Git history.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/content/docs/quick-start/proposal-session.md.backup` around lines 1-417: the file proposal-session.md.backup is an outdated duplicate of proposal-session.mdx that pollutes the docs and isn't processed by Astro. Delete proposal-session.md.backup from the repo (remove the backup file entirely) and keep any needed history via git (or a local backup) instead of the .backup copy; confirm that all references and images still point to the canonical proposal-session.mdx after deletion.

src/content/docs/quick-start/proposal-session.mdx (lines 364-382)
364-382: ⚠️ Potential issue | 🟠 Major | `:::重要提示` is not a valid Starlight admonition type — the callout will not render with any styling.

Starlight aside blocks "can be of type `note`, `tip`, `caution` or `danger`." The custom string `重要提示` ("important note") is unrecognized, and the block will render as unstyled plain text, causing the important instruction about committing archival planning files to lose all visual emphasis.

Use a valid type and pass the Chinese label as a custom title:
🐛 Proposed fix
```diff
-:::重要提示
+:::caution[重要提示]
 **提交归档的规划文件**
 ...
 :::
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/content/docs/quick-start/proposal-session.mdx` around lines 364-382: the admonition currently uses an invalid Starlight type `:::重要提示`; replace that block opener with a valid Starlight admonition type (one of `note`, `tip`, `caution`, or `danger`) and pass the Chinese label as the custom title so it renders with styling — e.g., change `:::重要提示` to `:::caution[重要提示]` (or `:::note[重要提示]`) while keeping the block content (the guidance about committing proposal.md, tasks.md, design.md, specs/) unchanged so the callout renders correctly; update the matching closing `:::` if present.

src/components/ClarityDebug.astro (line 5)
5: ⚠️ Potential issue | 🟠 Major | `CLARITY_DEBUG` is not accessible via `import.meta.env` without the `VITE_` prefix.

In Astro/Vite, only environment variables prefixed with `VITE_` (or `PUBLIC_` in Astro) are exposed through `import.meta.env`. `import.meta.env.CLARITY_DEBUG` will always be `undefined`. The AI summary and the sibling `BaiduAnalytics.astro` component both use the `VITE_` prefix pattern.

🐛 Proposed fix
```diff
-const clarityDebug = import.meta.env.CLARITY_DEBUG;
+const clarityDebug = import.meta.env.VITE_CLARITY_DEBUG;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ClarityDebug.astro` at line 5: replace the non-exposed env access in ClarityDebug.astro — the constant clarityDebug currently reads import.meta.env.CLARITY_DEBUG, which is undefined in Vite/Astro. Switch it to read the exposed variable import.meta.env.VITE_CLARITY_DEBUG (or a PUBLIC_ prefix if you intend a public var) and update any downstream usage to tolerate string/undefined (e.g., parse to boolean if needed) so the component behaves the same when the env var is absent; specifically, modify the declaration of clarityDebug in ClarityDebug.astro to reference VITE_CLARITY_DEBUG and ensure any conditional checks against clarityDebug remain correct.

src/content/docs/installation/desktop.mdx.backup (lines 1-292)
1-292: 🛠️ Refactor suggestion | 🟠 Major | Backup file should not be committed to the repository.

`desktop.mdx.backup` appears to be a working-copy backup alongside the actual `desktop.mdx` file. Committing backup files adds clutter and risks confusion about which version is canonical. Consider adding `*.backup` to `.gitignore` and removing this file from the PR.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/content/docs/installation/desktop.mdx.backup` around lines 1-292: the PR includes a backup file desktop.mdx.backup that should not be committed. Remove the file from the branch (delete desktop.mdx.backup), add a global ignore rule like *.backup to .gitignore (or add desktop.mdx.backup to .gitignore if you prefer), commit the .gitignore change, and ensure the deleted backup is removed from the PR diff (use git rm --cached if you need to stop tracking an already committed backup before committing the .gitignore update).

.github/workflows/version-monitor.yml (line 65)
65: ⚠️ Potential issue | 🟠 Major | Potential command injection via unsanitized `new_version` in shell contexts.

`steps.monitor.outputs.new_version` originates from an external HTTP response (the version source URL). If that source is ever compromised, the version string is interpolated directly into shell commands (branch names, commit messages, `git push`, etc.) via `${{ ... }}` expression expansion, which happens before the shell runs. A malicious version string (e.g., containing `$(evil)` or backticks) could execute arbitrary commands.

Consider sanitizing or validating the version string early in the workflow (e.g., assert it matches a semver regex) before using it in subsequent steps:
🛡️ Example validation step
```diff
+      - name: Validate version format
+        if: steps.monitor.outputs.update_needed == 'true'
+        run: |
+          VERSION="${{ steps.monitor.outputs.new_version }}"
+          if ! echo "$VERSION" | grep -Pq '^\d+\.\d+\.\d+(-[\w.]+)?$'; then
+            echo "::error::Invalid version format: $VERSION"
+            exit 1
+          fi
```

Also applies to: 78-78, 86-86, 91-92, 149-149
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/version-monitor.yml at line 65: the workflow uses steps.monitor.outputs.new_version directly in shell-interpolated strings (e.g., BRANCH_NAME="version-update-${{ steps.monitor.outputs.new_version }}", commit messages, git push), which risks command injection. Add an early validation/sanitization step that asserts steps.monitor.outputs.new_version matches a strict semver regex (or otherwise strips/escapes non-alphanumeric, dot, and hyphen characters) and set a new output like validated_version or sanitized_version from that step; then replace all occurrences of steps.monitor.outputs.new_version (used in BRANCH_NAME, commit messages, and the git commands referenced in the diff) with the validated_version/sanitized_version so only safe characters reach shell contexts, and fail the job if validation does not pass.

shared/src/desktop-context.tsx (lines 147-165)
147-165: ⚠️ Potential issue | 🟠 Major | Unhandled promise rejection in the optimization path.

The `.then()` call on line 148 has no `.catch()` handler. If `getDesktopVersionData()` rejects, this results in an unhandled promise rejection, which in React strict mode or Node environments could crash the app or produce console warnings. The `loadData` function (lines 107-143) correctly handles errors in its `catch` block, but this code path does not.

Proposed fix
```diff
 if (isDesktopVersionInitialized() && !serverData) {
   getDesktopVersionData().then((versionData) => {
     if (isMounted) {
       setData({
         latest: versionData.latest,
         platforms: versionData.platforms,
         error: versionData.error,
         loading: false,
         stable: {
           latest: versionData.channels.stable.latest,
           all: versionData.channels.stable.all,
         },
         beta: {
           latest: versionData.channels.beta.latest,
           all: versionData.channels.beta.all,
         },
       });
     }
+  }).catch((error) => {
+    if (isMounted) {
+      setData({
+        latest: null,
+        platforms: [],
+        error: error instanceof Error ? error.message : 'Unknown error',
+        loading: false,
+        stable: { latest: null, all: [] },
+        beta: { latest: null, all: [] },
+      });
+    }
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shared/src/desktop-context.tsx` around lines 147-165: the promise returned by getDesktopVersionData() inside the isDesktopVersionInitialized() branch is not handled for rejection. Add a .catch() (or use async/await with try/catch) to that chain so any error sets state the way the loadData() catch block does: call setData(...) to set loading: false and populate the error field (and sensible defaults for latest/platforms/stable/beta) when getDesktopVersionData() rejects, ensuring you only call setData when isMounted is true; reference getDesktopVersionData, isDesktopVersionInitialized, and setData, and mirror the error handling in loadData.

astro.config.mjs (lines 56-59)
56-59: ⚠️ Potential issue | 🟠 Major | Overriding `import.meta.env.PROD` may conflict with Astro/Vite defaults.

Astro and Vite already define `import.meta.env.PROD` based on the build mode (`--mode production`). Redefining it via `vite.define` to depend on `process.env.NODE_ENV` could cause mismatches — for example, `astro build` sets `PROD=true` regardless of `NODE_ENV`, but this override would set it to `false` if `NODE_ENV` isn't explicitly `"production"`.

Consider removing this override and letting Astro/Vite manage `PROD` natively.

Proposed fix
```diff
     define: {
-      'import.meta.env.PROD': JSON.stringify(
-        process.env.NODE_ENV === 'production'
-      ),
       "import.meta.env.VITE_CLARITY_PROJECT_ID": JSON.stringify(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@astro.config.mjs` around lines 56-59: the config currently overrides 'import.meta.env.PROD' via the define block (the define object setting 'import.meta.env.PROD'), which can conflict with Astro/Vite's built-in mode handling. Remove the explicit definition of 'import.meta.env.PROD' from the define object in astro.config.mjs so that Astro/Vite control PROD based on the build mode (or, if you need a custom flag, use a different key like VITE_CUSTOM_PROD and reference that instead).

src/components/StarlightHeader.astro (lines 33-48)
33-48: ⚠️ Potential issue | 🟠 Major | Mobile navigation gap: custom nav links and InstallButton are inaccessible below 64rem.

The custom nav links and InstallButton are hidden below 64rem (line 164), while `.right-group` uses `md:sl-flex` (~50rem). Between these breakpoints, the right group is visible but the nav links are hidden. More critically, these custom nav items are not integrated into Starlight's default sidebar or mobile menu — they exist only in StarlightHeader and have no fallback on small screens. Users below 64rem cannot access the InstallButton or the three custom navigation links (首页 "Home", 博客 "Blog", 技术支持群 "Support group") anywhere else in the interface.

Either expose these nav items in a mobile menu, or integrate them into Starlight's sidebar navigation. A sketch of the second option follows.
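One possible direction, sketched against Starlight's `sidebar` config (the site title and support URL are assumptions; the labels are taken from the review):

```ts
// astro.config.mjs (sketch): mirror the header links into Starlight's
// sidebar so the built-in mobile drawer exposes them on small screens.
import { defineConfig } from 'astro/config';
import starlight from '@astrojs/starlight';

export default defineConfig({
  integrations: [
    starlight({
      title: 'HagiCode Docs', // assumed title
      sidebar: [
        { label: '首页 (Home)', link: '/' },
        { label: '博客 (Blog)', link: '/blog/' },
        { label: '技术支持群 (Support)', link: 'https://example.com/support' }, // hypothetical URL
      ],
    }),
  ],
});
```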
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/StarlightHeader.astro` around lines 33-48: the right-group in StarlightHeader currently uses md:sl-flex, so its container remains visible at ~50rem while the nav links use classes that hide them below 64rem, leaving InstallButton and navLinks inaccessible on many small screens. Fix by either (A) moving or duplicating the navLinks and InstallButton into Starlight's sidebar/mobile menu so they appear in the built-in mobile drawer (export or pass navLinks into the app/sidebar nav config used by Starlight), or (B) changing the visibility classes on the <nav class="custom-nav-links"> and InstallButton so they become visible at the same breakpoint as .right-group; ensure external link props (link.external) and <Icon /> rendering remain unchanged, and update StarlightHeader's navLinks usage and InstallButton placement accordingly so mobile users can access 首页, 博客, and 技术支持群.

shared/src/version-manager.ts (lines 145-166)
145-166: ⚠️ Potential issue | 🟠 Major | Failed fetches are retried on every subsequent call with no backoff.

When `fetchDesktopVersions()` throws (line 145), the error path rejects pending promises and re-throws, but never sets `this.initialized` or `this.data`. The next call to `getVersionData()` will bypass all caches and attempt another fetch immediately. Under persistent network failures, every component mount triggers a new failing request — potentially causing request storms.

Consider caching the error state (with a TTL) so that callers within a cooldown window get the cached error instead of triggering another fetch:
🛡️ Sketch of error caching with TTL
```diff
+  private lastErrorTime: number = 0;
+  private static ERROR_RETRY_MS = 30_000; // 30s cooldown
+
   async getVersionData(): Promise<DesktopVersionData> {
     if (this.initialized && this.data) {
       return this.data;
     }
+
+    // Return cached error within cooldown window
+    if (this.data?.error && Date.now() - this.lastErrorTime < VersionManager.ERROR_RETRY_MS) {
+      throw new Error(this.data.error);
+    }

     // ... existing fetch logic ...

     } catch (error) {
       const errorMessage = error instanceof Error ? error.message : 'Unknown error';
       const errorData: DesktopVersionData = { /* ... */ };
+      this.data = errorData;
+      this.lastErrorTime = Date.now();
       // ...
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shared/src/version-manager.ts` around lines 145-166: when fetchDesktopVersions() fails, record and cache the error state so subsequent getVersionData() calls within a short TTL return the cached error instead of triggering immediate refetches. Build an error DesktopVersionData (as in the errorData variable), assign it to this.data and set this.initialized = true, store a timestamp like this.dataErrorTimestamp (or similar), clear pendingPromises as you already do, and rethrow; then update getVersionData() to check whether initialized data contains an error whose timestamp is younger than the TTL — if so, return the cached error data immediately. Ensure this.fetching is still set correctly in finally and that successful fetches clear the error timestamp.

shared/src/version.ts (lines 192-201)
192-201: ⚠️ Potential issue | 🟠 Major | `semver.compare` throws on invalid version strings — add defensive handling.

`semver.compare(cleaned1, cleaned2)` throws a `TypeError` if either argument is not a valid semver. Since `compareVersions` is called inside a `.reduce()` in `getLatestVersionFromVersions`, malformed version entries will crash the reduction. While the exception is caught by the outer try/catch in `getLatestVersion`, the error handling is implicit and fragile.

Validate versions using `semver.coerce()` before comparing, consistent with the approach used in `scripts/version-monitor.js`:

🛡️ Proposed fix
```diff
 export function compareVersions(v1: string, v2: string): number {
   const cleaned1 = v1.replace(/^v/, '');
   const cleaned2 = v2.replace(/^v/, '');

-  const cmp = semver.compare(cleaned1, cleaned2);
-  // semver.compare returns: negative if a < b, 0 if equal, positive if a > b
-  if (cmp < 0) return -1;
-  if (cmp > 0) return 1;
-  return 0;
+  // coerce handles partial versions like "1.2" or "1" gracefully
+  const sv1 = semver.coerce(cleaned1);
+  const sv2 = semver.coerce(cleaned2);
+
+  if (!sv1 && !sv2) return 0;
+  if (!sv1) return -1;
+  if (!sv2) return 1;
+
+  return semver.compare(sv1.version, sv2.version);
 }
```
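A quick check of the coerced behaviour (hypothetical inputs; results follow the fix above):

```ts
compareVersions('v1.2.0', '1.10.0');       // -1: 1.2.0 < 1.10.0
compareVersions('not-a-version', '1.0.0'); // -1: only the second input coerces
compareVersions('1.2', '1.2.0');           //  0: coerce fills the missing patch
```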
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@shared/src/version.ts` around lines 192-201: compareVersions currently calls semver.compare directly and will throw if inputs are invalid. Update compareVersions to coerce both inputs first (use semver.coerce on cleaned1 and cleaned2), then: if both coerced values are null return 0, if only one coerces treat the valid one as greater (return 1 if v1 coerced and v2 did not, -1 if v2 coerced and v1 did not), otherwise call semver.compare on the coerced .version strings and map to -1/0/1 as before; this keeps compareVersions (used by getLatestVersionFromVersions/getLatestVersion) defensive against malformed version strings.
```yaml
      run: |
        echo "Checking for existing PR for version: ${CURRENT_VERSION}"
        EXISTING_PR=$(gh pr list \
          --search "chore: update version to ${CURRENT_VERSION} in:head" \
          --state open \
          --json number,title \
          --jq '.[0]')

        if [ -n "$EXISTING_PR" ]; then
          PR_NUMBER=$(echo "$EXISTING_PR" | jq -r '.number')
          PR_TITLE=$(echo "$EXISTING_PR" | jq -r '.title')
          echo "pr_exists=true" >> $GITHUB_OUTPUT
          echo "existing_pr_number=$PR_NUMBER" >> $GITHUB_OUTPUT
          echo "existing_pr_url=$(gh pr view $PR_NUMBER --json url -q .url)" >> $GITHUB_OUTPUT
          echo "Found existing PR: #$PR_NUMBER - $PR_TITLE"
        else
          echo "pr_exists=false" >> $GITHUB_OUTPUT
          echo "No existing PR found for version ${CURRENT_VERSION}"
        fi
```
🧩 Analysis chain
🌐 Web query:
GitHub CLI gh pr list search qualifiers valid options
💡 Result:
gh pr list --search "<query>" accepts the same GitHub “issues & pull requests” search syntax (qualifiers) you use on GitHub.com. [1][2]
Common PR search qualifiers (valid in `--search`):

Scope / state
- `is:pr` (often implied for `gh pr list`, but useful when combining terms) [2]
- `state:open` / `state:closed` (also `is:open` / `is:closed`) [2]
- `is:merged` / `is:unmerged` [3]
- `is:queued` (in merge queue) [2]

Draft / review / CI
- `draft:true` / `draft:false` (also commonly `is:draft`) [4][5]
- `review:none | required | approved | changes_requested` [4][5]
- `reviewed-by:USER`, `review-requested:USER`, `user-review-requested:@me`, `team-review-requested:ORG/TEAM` [4][5]
- `status:success | failure | pending` [5]

Branches / commits
- `head:BRANCHPREFIX`, `base:BRANCH` [6]
- `<SHA>` (commit SHA prefix) [6]

People / metadata
- `author:USER` (including `author:@me`, `author:app/APPNAME`) [2]
- `assignee:USER`, `mentions:USER`, `commenter:USER`, `involves:USER`, `team:ORG/TEAM` [2][5]
- `label:"name"`, `milestone:"name"` [2]
- `linked:issue` (PRs linked to an issue they may close) [5]
- `archived:true|false` [3]

Dates / sorting / counts
- `merged:>=YYYY-MM-DD` (ISO date; supports `>`, `<`, ranges, optional time) [3]
- `comments:>N`, plus `created:`, `updated:`, `closed:` (date qualifiers) [3]
- `sort:created-asc` (and other `sort:` options) [5]

Examples
- Needs review + CI green: `gh pr list --search "review:required status:success"` [1][4][5]
- Your review requests: `gh pr list --search "is:open user-review-requested:@me"` [4]
- Targeting `main` from feature branches: `gh pr list --search "base:main head:feature-"` [6]
- Merged since Jan 1, 2026: `gh pr list --search "is:merged merged:>=2026-01-01"` [3]

Tip: `gh pr list` also has dedicated flags (e.g., `--author`, `--assignee`, `--base`, `--head`, `--draft`) that overlap with some qualifiers. [1]
🌐 Web query:
GitHub search qualifier "in:title" "in:body" "in:comments" pull requests valid
💡 Result:
Yes—in:title, in:body, and in:comments are valid GitHub Issues & Pull Requests search qualifiers, and they work for pull requests as long as you restrict the search to PRs with is:pr (or type:pr). GitHub documents these under “Searching issues and pull requests” → “Search by the title, body, or comments.” [1]
Examples:
- Search PR titles only: `is:pr in:title "my phrase"` [1]
- Search PR bodies only: `is:pr in:body "my phrase"` [1]
- Search PR comments only: `is:pr in:comments "my phrase"` [1]
- Combine fields: `is:pr "my phrase" in:title,body` [1]
Sources:
[1] GitHub Docs — “Searching issues and pull requests” (sections: “Search only issues or pull requests” and “Search by the title, body, or comments”). (docs.github.com)
🌐 Web query:
GitHub search qualifier "in:head" pull requests
💡 Result:
`in:head` is not a valid GitHub qualifier for searching pull requests.

In issues/PR search, the `in:` qualifier only limits where text is searched (e.g. `in:title`, `in:body`, `in:comments`). [1]

To search PRs by branch, use these PR-specific qualifiers instead: [1]
- Head branch (source/from branch): `head:<branch>`
- Base branch (target/into branch): `base:<branch>`

Examples:
- PRs opened from branch `feature/login` (in any repo you can access): `is:pr head:feature/login`
- PRs opened into `main` in a specific repo: `repo:OWNER/REPO is:pr base:main`
Sources
[1] GitHub Docs — “Searching issues and pull requests” (includes in: and head:/base: qualifiers) (docs.github.com)
🏁 Script executed:
```bash
git ls-files .github/workflows/version-monitor.yml | head -1 | xargs wc -l
```

Repository: HagiCode-org/docs
Length of output: 102
🏁 Script executed:
```bash
sed -n '120,140p' .github/workflows/version-monitor.yml
```

Repository: HagiCode-org/docs
Length of output: 985
`in:head` is not a valid GitHub search qualifier — fix to use `in:title` to prevent duplicate PRs.

The search qualifier `in:head` is not recognized by GitHub's search API. Valid `in:` qualifiers are limited to `in:title`, `in:body`, and `in:comments`. When given invalid qualifiers, GitHub treats them as literal search text rather than filters, causing the query to fail silently and never match existing PRs. This will cause the workflow to create duplicate version-update PRs.
Update both occurrences:
Line ~104

```diff
 EXISTING_PR=$(gh pr list \
-  --search "chore: update version to ${CURRENT_VERSION} in:head" \
+  --search "chore: update version to ${CURRENT_VERSION} in:title" \
   --state open \
```

Line ~129

```diff
 PREVIOUS_PRS=$(gh pr list \
-  --search "chore: update version to in:head" \
+  --search "chore: update version to in:title" \
   --state open \
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
The suggested replacement:

```yaml
      run: |
        echo "Checking for existing PR for version: ${CURRENT_VERSION}"
        EXISTING_PR=$(gh pr list \
          --search "chore: update version to ${CURRENT_VERSION} in:title" \
          --state open \
          --json number,title \
          --jq '.[0]')

        if [ -n "$EXISTING_PR" ]; then
          PR_NUMBER=$(echo "$EXISTING_PR" | jq -r '.number')
          PR_TITLE=$(echo "$EXISTING_PR" | jq -r '.title')
          echo "pr_exists=true" >> $GITHUB_OUTPUT
          echo "existing_pr_number=$PR_NUMBER" >> $GITHUB_OUTPUT
          echo "existing_pr_url=$(gh pr view $PR_NUMBER --json url -q .url)" >> $GITHUB_OUTPUT
          echo "Found existing PR: #$PR_NUMBER - $PR_TITLE"
        else
          echo "pr_exists=false" >> $GITHUB_OUTPUT
          echo "No existing PR found for version ${CURRENT_VERSION}"
        fi
```
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.github/workflows/version-monitor.yml around lines 100-118: the GH search uses an invalid qualifier "in:head", causing missed matches. Update the --search argument used in the gh pr list command to replace "in:head" with "in:title" (e.g., change the search string "chore: update version to ${CURRENT_VERSION} in:head" to "chore: update version to ${CURRENT_VERSION} in:title"), and make the same replacement for the other occurrence of that search pattern elsewhere in the workflow so existing version PRs are correctly detected.
| "overrides": [ | ||
| { | ||
| "route": "/_astro/*", | ||
| "headers": { | ||
| "cache-control": "public, max-age=31536000, immutable" | ||
| } | ||
| }, | ||
| { | ||
| "route": "/*.jpg", | ||
| "headers": { | ||
| "cache-control": "public, max-age=86400" | ||
| } | ||
| }, | ||
| { | ||
| "route": "/*.jpeg", | ||
| "headers": { | ||
| "cache-control": "public, max-age=86400" | ||
| } | ||
| }, | ||
| { | ||
| "route": "/*.png", | ||
| "headers": { | ||
| "cache-control": "public, max-age=86400" | ||
| } | ||
| }, | ||
| { | ||
| "route": "/*.gif", | ||
| "headers": { | ||
| "cache-control": "public, max-age=86400" | ||
| } | ||
| }, | ||
| { | ||
| "route": "/*.ico", | ||
| "headers": { | ||
| "cache-control": "public, max-age=86400" | ||
| } | ||
| }, | ||
| { | ||
| "route": "/*.svg", | ||
| "headers": { | ||
| "cache-control": "public, max-age=86400" | ||
| } | ||
| }, | ||
| { | ||
| "route": "/*.webp", | ||
| "headers": { | ||
| "cache-control": "public, max-age=86400" | ||
| } | ||
| }, | ||
| { | ||
| "route": "/*.woff", | ||
| "headers": { | ||
| "cache-control": "public, max-age=86400" | ||
| } | ||
| }, | ||
| { | ||
| "route": "/*.woff2", | ||
| "headers": { | ||
| "cache-control": "public, max-age=86400" | ||
| } | ||
| }, | ||
| { | ||
| "route": "/*.ttf", | ||
| "headers": { | ||
| "cache-control": "public, max-age=86400" | ||
| } | ||
| }, | ||
| { | ||
| "route": "/*.eot", | ||
| "headers": { | ||
| "cache-control": "public, max-age=86400" | ||
| } | ||
| }, | ||
| { | ||
| "route": "/*.otf", | ||
| "headers": { | ||
| "cache-control": "public, max-age=86400" | ||
| } | ||
| } | ||
| ] |
🧩 Analysis chain
🏁 Script executed:
```bash
cat public/staticwebapp.config.json
```

Repository: HagiCode-org/docs
Length of output: 1758
🌐 Web query:
Azure Static Web Apps staticwebapp.config.json routes overrides configuration schema
💡 Result:
The JSON Schema for Azure Static Web Apps staticwebapp.config.json is published on SchemaStore:

- Schema URL (for `$schema`): `https://json.schemastore.org/staticwebapp.config.json` (redirects to `https://www.schemastore.org/staticwebapp.config.json`). [1]

routes (route rules) shape

`routes` is an array of route-rule objects. Each item requires:
- `route` (string, required): the route pattern (wildcards supported)
- Optional: `methods` (array of HTTP verbs), `allowedRoles` (array of strings), `headers` (object), `rewrite` (string), `redirect` (string), `statusCode` (integer)

These align with Microsoft's routing docs (rules evaluated top-to-bottom, first match wins). [1][2]

responseOverrides (error response overrides) shape

`responseOverrides` is an object keyed by status-code strings (e.g. "401", "404"). Each status code maps to an object supporting `rewrite` (string), `redirect` (string), and `statusCode` (integer).

Microsoft documents responseOverrides for overriding common error codes (400/401/403/404) with rewrites/redirects. [1][2]

Sources: [1][2]
🌐 Web query:
Azure Static Web Apps staticwebapp.config.json official schema routes globalOverrides
💡 Result:
Official staticwebapp.config.json JSON Schema (for editor validation / IntelliSense)

Use the SchemaStore schema URL in your config's `$schema` field: `https://json.schemastore.org/staticwebapp.config.json`. [1]

Example:

```json
{
  "$schema": "https://json.schemastore.org/staticwebapp.config.json",
  "routes": [],
  "globalHeaders": {},
  "responseOverrides": {}
}
```

routes
- `routes` is an array of rules; rules are evaluated in order and stop at the first match. [2]
- Common route-rule properties include `route`, optional `methods`, and then actions like `rewrite`/`redirect`, plus `headers`, `statusCode`, and `allowedRoles`. [2]

"globalOverrides" (what it actually is)

There isn't a top-level property named globalOverrides in the official schema; the "global" behaviors are mainly:
- `globalHeaders`: headers applied to every static response unless a route overrides the same header name. [2]
- `responseOverrides`: per-status-code override behavior (custom rewrite/redirect/statusCode) for errors like 400/401/403/404. [2]

Key names in the official schema: `routes`, `globalHeaders`, `responseOverrides`, `navigationFallback`, `mimeTypes`, `auth`, `networking`, `forwardingGateway`, `platform`, `trailingSlash`. [1]
Citations:
- [1] https://json.schemastore.org/staticwebapp.config.json
- [2] https://learn.microsoft.com/en-us/azure/static-web-apps/configuration
🌐 Web query:
Azure Static Web Apps route pattern wildcard matching /**/ vs /* semantics
💡 Result:
Azure Static Web Apps routing does not use "glob" semantics like `/**/` (globstar). In staticwebapp.config.json, route patterns are basically exact paths or a single trailing `*` wildcard (plus the documented "file extension" wildcard forms). Wildcards are only supported at the end of the route path. [1]

What `/*` means in SWA
- `/*` is a trailing-wildcard rule at the site root.
- Like the docs' example `/calendar/*`, it matches requests under that path prefix (here: anything that starts with `/` and has something after it), and, by the same logic, it won't match the bare folder itself (for root, that's `/`). [1]

If you need `/` too, add an explicit `"/"` (or `"/index.html"`) rule before `/*`.

What `/**/` means (or doesn't)
- `/**/` is not a supported pattern in Azure Static Web Apps routing. There's no special "match any number of path segments" operator, and you can't put a wildcard "in the middle" to mean "any directories here". SWA only documents wildcards at the end. [1]
- More generally, SWA route patterns are limited enough that attempts to use multiple wildcards or more complex globbing tend to fail (for example, users report configs breaking when trying patterns with more than one `*`). [2]

Practical equivalents
- Match everything under a prefix (and subpaths): `"/admin/*"` (or match `/admin` too: `"/admin*"`). [1]
- SPA catch-all: use `navigationFallback.rewrite` with `exclude` patterns rather than trying to express `/**/`-style matching. [1]
🌐 Web query:
Azure Static Web Apps nested asset directories image cache routing pattern example
💡 Result:
Example staticwebapp.config.json that matches images in nested asset directories and applies long-lived caching (works for paths like /assets/img/logo.png and /assets/img/icons/ui/close.webp):

```json
{
  "routes": [
    {
      "route": "/assets/*.{png,jpg,jpeg,gif,svg,webp,avif,ico}",
      "headers": {
        "Cache-Control": "public, max-age=31536000, immutable"
      }
    }
  ]
}
```

Notes:
- In Azure Static Web Apps, a route like `/assets/*` matches requests under `/assets/` (including deeper paths), and you can filter by extension using `*.{...}` (multi-extension glob). [1]
- Routes are evaluated top-to-bottom; the first match wins, so keep asset-cache rules above broader rewrites/fallbacks. [1]

Sources: [1]
🏁 Script executed:
```bash
find . -type f \( -name "*.jpg" -o -name "*.png" -o -name "*.gif" -o -name "*.webp" \) | head -20
```

Repository: HagiCode-org/docs
Length of output: 1254
Fix overrides → routes and globalOverrides → globalHeaders; rewrite extension patterns
There are two problems with this configuration, plus a proposed fix:

1. Wrong top-level keys.
`routes` and `globalHeaders` are the correct property names in Azure SWA; `overrides` and `globalOverrides` are not recognized and will be silently ignored. Change `"overrides"` to `"routes"` and `"globalOverrides"` to `"globalHeaders"`.

2. Extension-only patterns don't match subdirectory assets.
Patterns like `/*.jpg` only match files at the root (e.g., `/photo.jpg`). Your assets are nested under `/img/`, `/img/installation/`, `/img/home/`, etc., so these patterns will not apply. Note: Azure SWA does not support `/**/` globstar patterns — only single trailing wildcards are allowed.

3. Proposed solution.
Instead of per-extension rules, use a prefix-based pattern. Replace the list of extension-specific routes with a single rule like `"/img/*"` to cover all assets under `/img/` and its subdirectories.
🔧 Proposed fix
- "globalOverrides": {
+ "globalHeaders": {
"headers": {
"cache-control": "public, max-age=0, must-revalidate"
}
},
- "overrides": [
+ "routes": [
{
- "route": "/_astro/*",
+ "route": "/img/*",
"headers": {
- "cache-control": "public, max-age=31536000, immutable"
+ "cache-control": "public, max-age=86400"
}
},
{
- "route": "/*.jpg",
+ "route": "/_astro/*",
"headers": {
- "cache-control": "public, max-age=86400"
- }
- },
- {
- "route": "/*.jpeg",
- "headers": {
- "cache-control": "public, max-age=86400"
- }
- },
- {
- "route": "/*.png",
- "headers": {
- "cache-control": "public, max-age=86400"
- }
- },
- {
- "route": "/*.gif",
- "headers": {
- "cache-control": "public, max-age=86400"
- }
- },
- {
- "route": "/*.ico",
- "headers": {
- "cache-control": "public, max-age=86400"
- }
- },
- {
- "route": "/*.svg",
- "headers": {
- "cache-control": "public, max-age=86400"
- }
- },
- {
- "route": "/*.webp",
- "headers": {
- "cache-control": "public, max-age=86400"
- }
- },
- {
- "route": "/*.woff",
- "headers": {
- "cache-control": "public, max-age=86400"
- }
- },
- {
- "route": "/*.woff2",
- "headers": {
- "cache-control": "public, max-age=86400"
- }
- },
- {
- "route": "/*.ttf",
- "headers": {
- "cache-control": "public, max-age=86400"
- }
- },
- {
- "route": "/*.eot",
- "headers": {
- "cache-control": "public, max-age=86400"
- }
- },
- {
- "route": "/*.otf",
- "headers": {
- "cache-control": "public, max-age=86400"
+ "cache-control": "public, max-age=31536000, immutable"
}
}
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@public/staticwebapp.config.json` around lines 10-89: the config uses the wrong top-level keys and overly restrictive patterns. Replace the "overrides" array with "routes" and any "globalOverrides" with "globalHeaders", then remove per-extension routes like "/*.jpg", "/*.png", etc. and replace them with prefix-based route(s) such as "/img/*" (and other top-level prefixes you serve) so assets in subdirectories match; update the header objects under the new "routes" entries so the same cache-control values are applied to those prefix routes.
```js
// For versions array, find the latest version
if (Array.isArray(data.versions) && data.versions.length > 0) {
  logger.info(`Found ${data.versions.length} versions in array`);

  let latestVersion = data.versions[0];
  let latestVersionObj = data.versions[0];

  for (const versionObj of data.versions) {
    const v1 = versionObj.version || versionObj;
    const v2 = latestVersionObj.version || latestVersionObj;

    const comparison = compareVersions(v1, v2);

    if (comparison === 1) {
      latestVersion = v1;
      latestVersionObj = versionObj;
    }
  }

  const isBeta = latestVersion.includes('beta') || latestVersion.includes('alpha') || latestVersion.includes('rc');

  logger.info(`Latest version from array: ${latestVersion} (${isBeta ? 'beta' : 'stable'})`);
  return { version: latestVersion, channel: isBeta ? 'beta' : 'stable', source: 'versions array (auto-detected)' };
}
```
Bug: `latestVersion` may remain an object, causing `.includes()` to throw.

`latestVersion` is initialized to `data.versions[0]` (line 182), which could be an object like `{ version: "1.0.0", ... }`. If the first element happens to be the latest (the loop never replaces it), `latestVersion` is still an object when `.includes()` is called on line 197, which throws `TypeError: latestVersion.includes is not a function`.

Proposed fix
```diff
 if (Array.isArray(data.versions) && data.versions.length > 0) {
   logger.info(`Found ${data.versions.length} versions in array`);

-  let latestVersion = data.versions[0];
+  let latestVersion = data.versions[0].version || data.versions[0];
   let latestVersionObj = data.versions[0];

   for (const versionObj of data.versions) {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
The suggested replacement:

```js
// For versions array, find the latest version
if (Array.isArray(data.versions) && data.versions.length > 0) {
  logger.info(`Found ${data.versions.length} versions in array`);

  let latestVersion = data.versions[0].version || data.versions[0];
  let latestVersionObj = data.versions[0];

  for (const versionObj of data.versions) {
    const v1 = versionObj.version || versionObj;
    const v2 = latestVersionObj.version || latestVersionObj;

    const comparison = compareVersions(v1, v2);

    if (comparison === 1) {
      latestVersion = v1;
      latestVersionObj = versionObj;
    }
  }

  const isBeta = latestVersion.includes('beta') || latestVersion.includes('alpha') || latestVersion.includes('rc');

  logger.info(`Latest version from array: ${latestVersion} (${isBeta ? 'beta' : 'stable'})`);
  return { version: latestVersion, channel: isBeta ? 'beta' : 'stable', source: 'versions array (auto-detected)' };
}
```
🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In scripts/version-monitor.js around lines 178 - 201: the bug is that
latestVersion is sometimes set to an object (data.versions[0]), so calling
latestVersion.includes(...) can throw. Change the initialization and the
assignments so latestVersion always holds the version string: set
latestVersion = data.versions[0].version || data.versions[0], use the same
extraction when updating inside the loop (use the v1 and v2 already computed),
and when returning or using latestVersion make sure it is the string value (or
derive it from latestVersionObj.version) before calling includes or returning
the version/channel. Keep the compareVersions, latestVersionObj, and
data.versions logic, but ensure latestVersion is consistently a string.
```
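
As an illustrative addendum, not part of the suggested diff: normalizing the mixed entries once, up front, removes this whole class of object-vs-string bugs. Below is a minimal TypeScript sketch under the assumption that entries are either version strings or objects with a `version` field; the `compareVersions` here is a naive stand-in that compares dotted numeric segments and ignores prerelease precedence, unlike the script's real helper.

```ts
type VersionEntry = string | { version: string };

// Normalize once so every later call site sees a plain string.
function toVersionString(entry: VersionEntry): string {
  return typeof entry === 'string' ? entry : entry.version;
}

// Naive stand-in for the script's compareVersions: compares dotted numeric
// segments only and ignores prerelease tags (an assumption, not the real helper).
function compareVersions(a: string, b: string): -1 | 0 | 1 {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const x = pa[i] ?? 0;
    const y = pb[i] ?? 0;
    if (x > y) return 1;
    if (x < y) return -1;
  }
  return 0;
}

function findLatest(entries: VersionEntry[]): { version: string; channel: 'beta' | 'stable' } {
  let latest = toVersionString(entries[0]);
  for (const entry of entries) {
    const candidate = toVersionString(entry);
    if (compareVersions(candidate, latest) === 1) latest = candidate;
  }
  const isBeta = ['beta', 'alpha', 'rc'].some(tag => latest.includes(tag));
  return { version: latest, channel: isBeta ? 'beta' : 'stable' };
}

console.log(findLatest([{ version: '1.2.0' }, '1.10.1', { version: '1.10.2-beta.1' }]));
// { version: '1.10.2-beta.1', channel: 'beta' } with this naive comparator
```

With the normalization isolated in `toVersionString`, the `.includes()` checks can never see a non-string again.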
```js
// If ad hiding is enabled, do not render
if (hideAd) {
  return null;
}
```
🧩 Analysis chain

🏁 Script executed:

```bash
cat -n src/components/BlogHeaderAd.astro
```

Repository: HagiCode-org/docs

Length of output: 4866
🌐 Web query:

```
Astro component conditional rendering return null frontmatter documentation
```

💡 Result:

In Astro, you don't conditionally render by returning null from the frontmatter (the frontmatter is just the component's server-only script). Instead, you render conditionally in the template using JSX-like expressions:

- Logical AND (`&&`), which renders nothing when the condition is falsy: `{happy && <p>...</p>}` (docs.astro.build)
- Ternary (`? :`), where the "else" branch can be `null` to render nothing: `{cond ? <Thing /> : null}` (docs.astro.build)

If you meant an Astro page (`src/pages/*.astro`) and tried to return null from the frontmatter: pages can only return a `Response` (e.g., `return new Response(...)` or `return Astro.redirect(...)`), not null. (docs.astro.build)

Citations:
- 1: https://docs.astro.build/en/tutorial/2-pages/3/
- 2: https://docs.astro.build/en/tutorial/2-pages/3/
- 3: https://docs.astro.build/en/reference/errors/only-response-can-be-returned/?utm_source=openai
`return null` in component frontmatter is not valid, so the `hideAd` prop has no effect.

Astro components don't support `return` statements in the frontmatter script for conditional rendering. Returning from that script applies only to page endpoints, and even then only `Response` objects are allowed, not `null`. As written, the ad renders regardless of the `hideAd` value.

Wrap the template markup in a conditional instead:

```diff
-// If ad hiding is enabled, do not render
-if (hideAd) {
-  return null;
-}
-
 const { glmCoding } = GLM_PROMO_LINKS;
 ---
+{!hideAd && (
   <div class="blog-header-ad">
   ...
   </div>
+)}
```

🤖 Prompt for AI Agents
```
Verify each finding against the current code and only fix it if needed.
In src/components/BlogHeaderAd.astro around lines 15 - 18: the frontmatter
uses "if (hideAd) { return null; }", which is invalid in Astro component
scripts and means hideAd doesn't prevent rendering. Remove the return-based
conditional and instead wrap the component's template markup in a conditional
check on the hideAd prop (e.g., render the ad block only when hideAd is
falsy), ensuring any variables used in the template (like hideAd) remain in
the frontmatter so the template can read them.
```
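
For readers less familiar with the template semantics the fix relies on, here is a plain-TypeScript illustration (a sketch only: Astro templates are not TypeScript, but they borrow JSX's falsy-gating rule, and the markup string below is just a placeholder):

```ts
// `cond && value` evaluates to `value` when `cond` is truthy and to the falsy
// `cond` itself otherwise; Astro, like JSX, renders falsy results as nothing.
function renderHeaderAd(hideAd: boolean): string | false {
  return !hideAd && '<div class="blog-header-ad">...</div>';
}

console.log(renderHeaderAd(false)); // the placeholder ad markup
console.log(renderHeaderAd(true));  // false -> rendered as nothing
```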
| <div style="display: none;" data-clarity-debug="{JSON.stringify({ | ||
| clarityProjectId, | ||
| isProduction, | ||
| clarityDebug, | ||
| shouldLoad: isProduction && clarityProjectId, | ||
| })}"></div> |
Bug: an Astro expression inside a quoted attribute is rendered as literal text.

The `{JSON.stringify(...)}` is wrapped in double quotes, so Astro treats it as a literal string: the output will be the raw source text `{JSON.stringify({...})}`, not the evaluated JSON.

Remove the surrounding quotes to use an Astro expression:
🐛 Proposed fix

```diff
-<div style="display: none;" data-clarity-debug="{JSON.stringify({
-  clarityProjectId,
-  isProduction,
-  clarityDebug,
-  shouldLoad: isProduction && clarityProjectId,
-})}"></div>
+<div style="display: none;" data-clarity-debug={JSON.stringify({
+  clarityProjectId,
+  isProduction,
+  clarityDebug,
+  shouldLoad: isProduction && clarityProjectId,
+})}></div>
```

📝 Committable suggestion
‼️ IMPORTANT

Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```astro
<div style="display: none;" data-clarity-debug={JSON.stringify({
  clarityProjectId,
  isProduction,
  clarityDebug,
  shouldLoad: isProduction && clarityProjectId,
})}></div>
```
🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In src/components/ClarityDebug.astro around lines 14 - 19: the
data-clarity-debug attribute is currently a quoted string, so the Astro
expression {JSON.stringify(...)} is rendered literally. Change the attribute
to an Astro expression (remove the surrounding quotes and use the curly-brace
expression form) on the div that sets data-clarity-debug, so it evaluates
JSON.stringify({ clarityProjectId, isProduction, clarityDebug, shouldLoad:
isProduction && clarityProjectId }) at render time.
```
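
A quick way to confirm the fix in the browser is to parse the attribute back on the client. This is a hypothetical check, not part of the PR; it assumes the element and attribute names shown above:

```ts
// With the quoted-attribute bug, JSON.parse throws, because the attribute holds
// the literal source text "{JSON.stringify({...})}" rather than JSON.
const el = document.querySelector<HTMLElement>('[data-clarity-debug]');
if (el) {
  try {
    const debug = JSON.parse(el.dataset.clarityDebug ?? '');
    console.log('Clarity debug payload:', debug);
  } catch {
    console.warn('data-clarity-debug is not valid JSON; the expression was not evaluated.');
  }
}
```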
```astro
const jsonLd = JSON.stringify(structuredData);
---

<script type="application/ld+json" set:html={jsonLd} />
```
`JSON.stringify` does not escape `</script>`, enabling HTML injection via `set:html`.

If any string value in `data` (e.g. `description`, `name`) contains `</script>`, the raw output injected by `set:html` will prematurely close the enclosing `<script>` tag, breaking the page and potentially enabling XSS. This is the classic JSON-LD embedding pitfall.

Replace angle brackets (and `&` for completeness) with their Unicode escapes after serialization:
🔒 Proposed fix: safe JSON-LD serialization

```diff
-const jsonLd = JSON.stringify(structuredData);
+// Escape characters that are unsafe inside an HTML <script> block.
+const jsonLd = JSON.stringify(structuredData)
+  .replace(/&/g, '\\u0026')
+  .replace(/</g, '\\u003c')
+  .replace(/>/g, '\\u003e');
```

🤖 Prompt for AI Agents
```
Verify each finding against the current code and only fix it if needed.
In src/components/StructuredData.astro around lines 41 - 44: the JSON-LD
string created by jsonLd = JSON.stringify(structuredData) can contain raw
sequences like </script>, enabling HTML injection when used with set:html.
After serializing structuredData, sanitize the string by replacing &, < and >
with their Unicode escape sequences (& -> \u0026, < -> \u003c, > -> \u003e) so
the final jsonLd contains no raw angle brackets before it is injected into the
<script type="application/ld+json" set:html={jsonLd} /> element. Update the
jsonLd assignment to perform this replacement so the JSON-LD remains valid
JSON but cannot prematurely close the script tag.
```
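
Beyond the inline fix, the escaping is easy to centralize and unit-test. A minimal sketch, assuming nothing about the project's layout; the `safeJsonLd` name is invented here for illustration:

```ts
// Serialize a value for embedding inside <script type="application/ld+json">.
// The \u escapes are valid JSON, so JSON.parse recovers the original text.
function safeJsonLd(value: unknown): string {
  return JSON.stringify(value)
    .replace(/&/g, '\\u0026')
    .replace(/</g, '\\u003c')
    .replace(/>/g, '\\u003e');
}

// The classic attack string can no longer close the enclosing <script> tag:
const payload = { description: '</script><script>alert(1)</script>' };
console.log(safeJsonLd(payload));
// {"description":"\u003c/script\u003e\u003cscript\u003ealert(1)\u003c/script\u003e"}
console.log(JSON.parse(safeJsonLd(payload)).description === payload.description); // true
```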
```csharp
options.UseNpgsql(databaseSettings.ConnectionString, npgsqlOptions =>
{
    // Enable JSONB support, which is very useful when handling AI conversation records
    npgsqlOptions.UseJsonNet();
});

// Configure the connection pool retry strategy
options.EnableRetryOnFailure(3);
}
```
🧩 Analysis chain

🏁 Script executed:

```bash
head -100 "src/content/docs/blog/2026-02-01-net-core-双数据库实战优雅融合-postgresql-与-sqlite-的最佳实践.mdx" | tail -50
```

Repository: HagiCode-org/docs

Length of output: 1258

🌐 Web query:

```
Npgsql EF Core EnableRetryOnFailure API DbContextOptionsBuilder
```

💡 Result:

To enable EF Core connection resiliency (retries) with the Npgsql provider, configure it on the `UseNpgsql(...)` options builder:

```csharp
services.AddDbContext<MyDbContext>(options =>
    options.UseNpgsql(connString, npgsql =>
        npgsql.EnableRetryOnFailure()));
```

`EnableRetryOnFailure` is on `NpgsqlDbContextOptionsBuilder` and has overloads such as:

```csharp
options.UseNpgsql(connString, npgsql =>
    npgsql.EnableRetryOnFailure(
        maxRetryCount: 5,
        maxRetryDelay: TimeSpan.FromSeconds(30),
        errorCodesToAdd: null));
```

Sources: the Npgsql EF Core "Execution Strategy" docs [1] and the API reference for `NpgsqlDbContextOptionsBuilder.EnableRetryOnFailure` overloads [2].
`EnableRetryOnFailure` is called on the wrong object, so this won't compile.

Line 81 calls `options.EnableRetryOnFailure(3)` on the outer `DbContextOptionsBuilder<ApplicationDbContext>`. `EnableRetryOnFailure` must be called inside the `UseNpgsql` options callback, on `NpgsqlDbContextOptionsBuilder`. Move it into the `npgsqlOptions` lambda:

🐛 Proposed fix

```diff
 options.UseNpgsql(databaseSettings.ConnectionString, npgsqlOptions =>
 {
     npgsqlOptions.UseJsonNet();
+    npgsqlOptions.EnableRetryOnFailure(3);
 });
-
-// Configure the connection pool retry strategy
-options.EnableRetryOnFailure(3);
```

Additionally, if `databaseSettings` is null (config section absent), `databaseSettings?.DbType?.ToLower()` evaluates to null, the comparison with `"sqlite"` is false, and execution falls into the else branch, where `databaseSettings.ConnectionString` is accessed without a null guard, throwing a `NullReferenceException`. Add an early guard:

```csharp
if (databaseSettings is null)
    throw new InvalidOperationException("Database configuration section is missing.");
```

🤖 Prompt for AI Agents
```
Verify each finding against the current code and only fix it if needed.
In src/content/docs/blog/2026-02-01-net-core-双数据库实战优雅融合-postgresql-与-sqlite-的最佳实践.mdx
around lines 74 - 82: move the EnableRetryOnFailure(3) call into the Npgsql
options callback so it is invoked on the NpgsqlDbContextOptionsBuilder (i.e.,
call npgsqlOptions.EnableRetryOnFailure(3) inside the UseNpgsql(npgsqlOptions
=> { ... }) lambda alongside npgsqlOptions.UseJsonNet()). Also add an early
null guard for databaseSettings (check whether databaseSettings is null and
throw an InvalidOperationException with a clear message) before using
databaseSettings.ConnectionString, so the else branch cannot dereference a
null settings object; update references in the method that configures
ApplicationDbContext accordingly.
```
```mdx
**Advantage**: the AI not only generates code, it also explains the design rationale and complexity analysis.

import { LinkCard, CardGrid } from '@astrojs/starlight/components';
```
There was a problem hiding this comment.
Import statement placed mid-document — must be moved to the top after frontmatter.
In MDX (and Astro's MDX processing), import statements must appear at the top of the file, immediately after the frontmatter block. Placing the import on line 538, deep within the prose, will likely cause a build error or silently fail to resolve the components, resulting in <LinkCard> and <CardGrid> not rendering on lines 546–594.
🐛 Proposed fix
Move the import to the top of the file, right after the frontmatter:
---
title: Hagicode 产品概述
description: 从产品定位、核心能力、技术架构和使用场景等多个维度,全面了解 Hagicode 这款 AI 驱动的代码开发辅助工具。
---
+
+import { LinkCard, CardGrid } from '@astrojs/starlight/components';
本概述文档将帮助您从产品定位、核心能力、技术架构和使用场景等多个维度,全面了解 Hagicode 这款 AI 驱动的代码开发辅助工具。And remove line 538:
-import { LinkCard, CardGrid } from '@astrojs/starlight/components';
-
## 从这里开始🤖 Prompt for AI Agents
```
Verify each finding against the current code and only fix it if needed.
In src/content/docs/product-overview.mdx at line 538: the import for the
Starlight components (LinkCard, CardGrid) is currently placed mid-document and
must be moved immediately after the MDX frontmatter. Locate the lone import
line "import { LinkCard, CardGrid } from '@astrojs/starlight/components';"
(currently around where LinkCard/CardGrid are later used), cut it from there,
and paste it directly after the frontmatter block at the top of the file so
the components resolve when used in the document; remove the original
mid-document import line.
```
```bash
npm --version
```

If it is not installed or the version is too old, refer to the related software installation instructions in the [Installation Guide](/installation).
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# List the structure of docs directories
echo "=== Documentation directory structure ==="
find src/content/docs -type d -name "installation*" | head -20
echo -e "\n=== Files in installation directory ==="
find src/content/docs/installation -type f \( -name "*.md" -o -name "*.mdx" \) 2>/dev/null | sort
echo -e "\n=== Checking for index files ==="
find src/content/docs/installation -name "index.*" 2>/dev/null
echo -e "\n=== All docs at top level ==="
ls -la src/content/docs/ | grep -E "^d"
```

Repository: HagiCode-org/docs

Length of output: 782

🏁 Script executed:

```bash
#!/bin/bash
# Check the astro config or similar to understand routing
echo "=== Looking for config files that might define routes ==="
find . -maxdepth 2 -type f \( -name "astro.config.*" -o -name "tsconfig.*" \) 2>/dev/null
echo -e "\n=== Checking file at the line in question ==="
sed -n '45,60p' src/content/docs/related-software-installation/openspec/setup-openspec.mdx
```

Repository: HagiCode-org/docs

Length of output: 465

Update the link from `/installation` to a specific installation guide route.

The `/installation` route does not exist. The installation directory contains only `desktop.mdx` and `docker-compose.mdx`, which create the routes `/installation/desktop` and `/installation/docker-compose` respectively. Change the link to `/installation/desktop` or another specific guide, depending on the intended target.
🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In src/content/docs/related-software-installation/openspec/setup-openspec.mdx
at line 52: the markdown link currently points to the non-existent route
"/installation". Update the href to a concrete installation page (e.g., change
"/installation" to "/installation/desktop" or "/installation/docker-compose",
depending on which guide is intended) by editing the line that contains the
"/installation" URL so it targets the correct route, such as
"/installation/desktop".
```
```ts
items: posts.map(post => ({
  title: post.data.title,
  link: `${basePath}/${post.id}/`, // use id instead of slug
```
Double-slash bug in generated RSS links.

With `basePath` always returning `'/'`, the link template `` `${basePath}/${post.id}/` `` produces `//blog/2024-01-01-post/`. Browsers interpret a leading `//` as a protocol-relative URL (e.g., `https://blog/2024-01-01-post/`), which is broken.
🐛 Proposed fix

```diff
 items: posts.map(post => ({
   title: post.data.title,
-  link: `${basePath}/${post.id}/`,
+  link: `/${post.id}/`,
   pubDate: post.data.date,
```

Or, if you want to keep `basePath` for future flexibility, trim the trailing slash:

```diff
-  link: `${basePath}/${post.id}/`,
+  link: `${basePath.replace(/\/$/, '')}/${post.id}/`,
```

📝 Committable suggestion
‼️ IMPORTANT

Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
items: posts.map(post => ({
  title: post.data.title,
  link: `/${post.id}/`,
```
🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In src/pages/blog/rss.xml.ts around lines 27 - 29: the generated RSS link uses
the template `${basePath}/${post.id}/`, which produces a double slash when
basePath is '/'. Fix by normalizing basePath before building links (e.g.,
remove a trailing slash, or default basePath to empty for the root) so the
link value in posts.map uses a single slash; update the code that computes
basePath or the link construction in posts.map to trim trailing slashes from
basePath so links become /blog/2024-01-01-post/ instead of //blog/....
```
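
As a follow-up thought, not part of the suggested diff: if more pages end up building links from a configurable base, a small join helper avoids this class of bug wherever it appears. A sketch assuming `basePath` values like `'/'`, `''`, or `'/docs'`; `joinUrlPath` is an invented name:

```ts
// Join URL path segments, collapsing duplicate slashes at the seams.
function joinUrlPath(...segments: string[]): string {
  const joined = segments
    .map(s => s.replace(/^\/+|\/+$/g, '')) // trim slashes on each segment
    .filter(Boolean)
    .join('/');
  return `/${joined}/`;
}

console.log(joinUrlPath('/', 'blog/2024-01-01-post'));     // "/blog/2024-01-01-post/"
console.log(joinUrlPath('/docs', 'blog/2024-01-01-post')); // "/docs/blog/2024-01-01-post/"
```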